# Long Text Retrieval
## M2 BERT 8k Retrieval Encoder V1

hazyresearch · Apache-2.0 · Text Embedding · Transformers · English · Downloads: 52 · Likes: 4

M2-BERT-8K is an 80-million-parameter long-context retrieval encoder based on the architecture proposed in the paper "Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT", supporting sequences up to 8,192 tokens.
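As a usage illustration, the sketch below follows the loading pattern published on the upstream M2-BERT model cards. The repo id, the `trust_remote_code` custom-code path, the shared `bert-base-uncased` vocabulary, the `sentence_embedding` output key, and the 768-dimensional output are assumptions carried over from those cards, not guarantees of this listing.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id, following hazyresearch's Hugging Face naming.
MODEL_ID = "hazyresearch/M2-BERT-8k-Retrieval-Encoder-V1"
MAX_SEQ_LEN = 8192  # the 8k context window this checkpoint advertises

# M2-BERT ships custom modeling code, so trust_remote_code=True is
# required (assumption, mirroring the upstream model cards).
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, trust_remote_code=True
)
# The checkpoints reuse the bert-base-uncased vocabulary (assumption).
tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased", model_max_length=MAX_SEQ_LEN
)

text = "A long document to embed for retrieval..."
inputs = tokenizer(
    text,
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=MAX_SEQ_LEN,
    return_token_type_ids=False,
)
with torch.no_grad():
    outputs = model(**inputs)

# The retrieval head returns one pooled vector per input under
# 'sentence_embedding' (assumption based on the upstream cards).
embedding = outputs["sentence_embedding"]  # expected shape: (1, 768)
print(embedding.shape)
```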
## M2 BERT 2k Retrieval Encoder V1

hazyresearch · Apache-2.0 · Text Embedding · Transformers · English · Downloads: 80 · Likes: 2

An 80-million-parameter M2-BERT-2k checkpoint designed for long-context retrieval, supporting a context length of 2,048 tokens.
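Because this checkpoint tops out at 2,048 tokens, longer documents must be split before embedding. Below is a minimal sketch of overlapping token-window chunking, assuming the same `bert-base-uncased` tokenizer as above; the window and overlap sizes are illustrative choices, not values from the model card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
CONTEXT_LEN = 2048  # the 2k window this checkpoint supports


def chunk_ids(text: str, window: int = CONTEXT_LEN, overlap: int = 256):
    """Split a document into overlapping token windows that each fit the model."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = window - overlap
    return [ids[i : i + window] for i in range(0, max(len(ids) - overlap, 1), step)]


long_doc = "retrieval " * 6000  # stand-in for a real long document
chunks = chunk_ids(long_doc)
print(len(chunks), [len(c) for c in chunks[:3]])
```

Each chunk is embedded separately, and chunk scores are aggregated per document at query time (for example, taking the maximum chunk similarity).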
## M2 Bert 80M 32k Retrieval

togethercomputer · Apache-2.0 · Text Embedding · Transformers · English · Downloads: 1,274 · Likes: 129

An 80-million-parameter M2-BERT pretrained checkpoint optimized for long-context retrieval, supporting sequences up to 32,768 tokens.
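Whichever context length is used, query-time retrieval with these encoders reduces to nearest-neighbour search over embedding vectors. A minimal cosine-similarity ranking sketch in plain PyTorch follows; the 768-dimensional embedding size is an assumption based on the models' BERT-base-scale 80M architecture.

```python
import torch
import torch.nn.functional as F


def rank_by_cosine(query_emb: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """Return document indices sorted by cosine similarity to the query."""
    q = F.normalize(query_emb, dim=-1)  # (768,)
    d = F.normalize(doc_embs, dim=-1)   # (num_docs, 768)
    scores = d @ q                      # cosine similarity per document
    return torch.argsort(scores, descending=True)


# Stand-in vectors; in practice these come from the encoder above.
query_emb = torch.randn(768)
doc_embs = torch.randn(1000, 768)
top10 = rank_by_cosine(query_emb, doc_embs)[:10]
print(top10)
```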
## M2 Bert 80M 8k Retrieval

togethercomputer · Apache-2.0 · Text Embedding · Transformers · English · Downloads: 198 · Likes: 33

An 80-million-parameter M2-BERT pretrained checkpoint with an 8,192-token sequence length, fine-tuned for long-context retrieval.
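The togethercomputer checkpoints can also be reached through a hosted, OpenAI-compatible embeddings endpoint. The sketch below assumes Together's `https://api.together.xyz/v1/embeddings` URL, the `togethercomputer/m2-bert-80M-8k-retrieval` model string, a `TOGETHER_API_KEY` environment variable, and the standard OpenAI-style response shape; verify these against current Together documentation.

```python
import os

import requests

# Assumed endpoint and model string for Together's hosted embeddings API.
resp = requests.post(
    "https://api.together.xyz/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "togethercomputer/m2-bert-80M-8k-retrieval",
        "input": "A long document to embed for retrieval...",
    },
    timeout=60,
)
resp.raise_for_status()
# OpenAI-compatible responses put vectors under data[i].embedding (assumption).
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))
```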